Randomness and Computational Complexity 🤖

:: computational complexity, randomness, physics

By: Ben Warren

TL;DR: Randomness is a mathematical device, not a physical phenomenon; it lets us trade computational complexity for uncertainty.

You flip a coin. Should you call heads or tails? In probability class, we’d say that the outcome of the coin toss is a random variable - it will be HEADS with probability 1/2, and TAILS with probability 1/2. But zoom in. Was that coin toss really random? Imagine I set up a sensor array that lets me measure the precise direction and force with which the coin is tossed, the air density, and the geometry of the surface that the coin will land on. If I feed that data into a computer model, it’s safe to say that I can get a slight edge in predicting the outcome of that coin toss - maybe 60/40, instead of 50/50. In fact, the physics governing a coin toss is, as far as we know, deterministic; if I knew all of the details of the initial conditions of the toss and had infinite compute at my disposal, I could compute with certainty the side on which the coin would land.

Think of any other place where we frequently talk about randomness. The stock market? Those prices might look random, but ultimately they are driven by individual human decisions - fundamentally, there is no random number generator behind the scenes. The motion of particles in a gas? If we have the patience, we can model all of the forces and simulate the motion of individual particles step-by-step, as molecular dynamics programs do. Even “true” random number generators typically rely on deterministic physical phenomena.
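
To make the determinism concrete, here is a toy Python sketch of a coin whose outcome is fully determined by its launch conditions, loosely inspired by Keller’s and Diaconis’s analyses of coin flipping. Everything here is an illustrative assumption rather than real ballistics: the function `toy_coin_flip`, the parameters `launch_speed` and `spin_rate`, the numbers, and the rule that the face showing depends only on the parity of the number of half-turns.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def toy_coin_flip(launch_speed: float, spin_rate: float) -> str:
    """Deterministic toy coin model (illustrative, not real ballistics).

    launch_speed: upward speed in m/s; spin_rate: angular velocity in rad/s.
    The coin starts heads-up, is caught at launch height after t = 2v/g, and
    shows its starting face iff it completed an even number of half-turns.
    """
    airtime = 2 * launch_speed / G
    half_turns = int(spin_rate * airtime / math.pi)
    return "HEADS" if half_turns % 2 == 0 else "TAILS"

# The same launch conditions always give the same face - nothing random here.
print(toy_coin_flip(2.0, 240.0))

# But small (~2%) errors in the measured spin rate are enough to flip the
# prediction, which is why the toss looks random unless we measure precisely.
for spin in (230.0, 235.0, 240.0, 245.0, 250.0):
    print(spin, toy_coin_flip(2.0, spin))
```

The apparent randomness comes from our imprecise knowledge of the inputs, not from the dynamics: better sensors shrink the uncertainty, exactly the 50/50-to-60/40 shift described above.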

So, what is “randomness”? It’s a mathematical abstraction that buys an enormous - often exponential - speedup over computing the true outcome. Rather than follow the molecular dynamics of the gas particle to find its exact position at timestep 10,000, I just pick random coordinates in the holding vessel. However, notice the tradeoff - we are reducing computational complexity by introducing uncertainty. For an individual event - like the flip of a coin or the motion of a gas particle - a model with randomness cannot give the “true” answer. The way we get back to a “true” answer is by reducing our model’s resolution. Instead of asking about the motion of an individual gas particle, ask about the aggregate behavior of a system of gas particles in a container; if we use random variables with the same distribution of outcomes as the actual events, our statistical conclusions will be valid. The pattern is: introduce randomness, then jump up to a higher level of abstraction.
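
Here is a minimal sketch of that trade in Python, using made-up numbers: a single particle bouncing deterministically back and forth in a 1D box, versus a model that just draws its position uniformly at random. The box length `L`, the initial condition `X0`, the speed, the timestep, and the helper `bounce_position` are all assumptions for illustration. The random model cannot say where the particle is at timestep 10,000, but the aggregate statistics (mean position, mean squared position) come out the same.

```python
import random

L = 1.0                      # length of the 1D box (arbitrary units)
X0, SPEED = 0.1234, 0.737    # made-up initial position and speed
DT, STEPS = 1e-3, 500_000    # timestep and number of simulated steps

def bounce_position(step: int) -> float:
    """Deterministic position of a particle bouncing elastically in [0, L]."""
    phase = (X0 + SPEED * step * DT) % (2 * L)   # unfold the wall reflections
    return phase if phase <= L else 2 * L - phase

# Expensive route: follow the deterministic dynamics step by step.
det_mean = sum(bounce_position(s) for s in range(STEPS)) / STEPS
det_mean_sq = sum(bounce_position(s) ** 2 for s in range(STEPS)) / STEPS

# Cheap route: skip the dynamics and model the position as Uniform(0, L).
samples = [random.uniform(0.0, L) for _ in range(100_000)]
rand_mean = sum(samples) / len(samples)
rand_mean_sq = sum(x * x for x in samples) / len(samples)

print(det_mean, rand_mean)        # both close to L/2 = 0.5
print(det_mean_sq, rand_mean_sq)  # both close to L**2/3 ≈ 0.333
print(bounce_position(10_000))    # only the deterministic model can answer this
```

Note the asymmetry in cost: the deterministic route has to pay for every timestep between now and the question being asked, while the random model costs the same no matter how far ahead we look - that is the complexity-for-uncertainty trade from the TL;DR.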

Introducing randomness is necessary for making useful predictions about any physical system of realistic size. All current physical theories (Newtonian mechanics, for instance) implicitly tolerate uncertainty at the microscale when applied in practice. This makes sense: if the physical universe is, indeed, deterministic and at least as powerful as a Turing machine, then no computer that we build to simulate the universe can run faster than the universe itself, because the simulating computer must be built out of the stuff of the physical universe - it is a small subsystem trying to out-compute the whole. Therefore, no prediction of the future can be precise at a maximally fine-grained resolution. To compute faster than the universe itself, we have to take shortcuts, grabbing exponential speedups by reducing our resolution - constraining our scope and introducing randomness.

For an interesting, non-consensus theory of physics that takes this everything-as-computation point of view, check out the Wolfram Physics Project.